28 research outputs found

    Exploratory Study of the Privacy Extension for System Theoretic Process Analysis (STPA-Priv) to elicit Privacy Risks in eHealth

    Context: System Theoretic Process Analysis for Privacy (STPA-Priv) is a novel privacy risk elicitation method that uses a top-down approach. It has received little attention so far, but compared to other methods it may offer a convenient, structured approach and generate additional artifacts. Aim: The aim of this exploratory study is to determine what benefits the privacy risk elicitation method STPA-Priv offers and to explain how the method can be used. Method: We apply STPA-Priv to a real-world health scenario involving a smart glucose measurement device used by children. Different kinds of data from the smart device, including location data, are to be shared with parents, physicians, and urban planners. This makes it a sociotechnical system with suitably complex privacy risks to uncover. Results: We find that STPA-Priv is a structured method for privacy analysis that uncovers complex privacy risks. The method is supported by a tool called XSTAMPP, which makes the analysis and its results more thorough. Additionally, we learn that an iterative application of the steps may be necessary to find further privacy risks once more information about the system becomes available. Conclusions: STPA-Priv helps to identify complex privacy risks that derive from sociotechnical interactions in a system. It also outputs privacy constraints that are to be enforced by the system to ensure privacy.
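
    To make the kind of analysis described above more tangible, the following is a minimal Python sketch of the core idea: model control actions between system components and derive privacy constraints from those occurring in risky contexts. All components, actions, contexts, and the simple context check are hypothetical assumptions for illustration; they are not output of STPA-Priv or the XSTAMPP tool.

        from dataclasses import dataclass

        # Hypothetical sketch of the STPA-Priv core idea: model control actions
        # in the sociotechnical system, flag those whose context compromises
        # privacy, and derive privacy constraints the system must enforce.

        @dataclass
        class ControlAction:
            source: str    # controlling component, e.g., the glucose device
            target: str    # recipient, e.g., urban planners
            action: str    # e.g., "share location data"
            context: str   # condition under which the action occurs

        def derive_privacy_constraints(actions, risky_contexts):
            """Emit one constraint per action that occurs in a risky context."""
            return [
                f"{a.source} must not '{a.action}' with {a.target} while '{a.context}'"
                for a in actions if a.context in risky_contexts
            ]

        actions = [
            ControlAction("glucose device", "urban planners",
                          "share location data", "child is at home"),
            ControlAction("glucose device", "physician",
                          "share glucose readings", "treatment is ongoing"),
        ]
        print(derive_privacy_constraints(actions, {"child is at home"}))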

    Concepts for Data Security and Data Privacy in Mobile Applications (Konzepte für Datensicherheit und Datenschutz in mobilen Anwendungen)

    Smart devices, and smartphones in particular, play an increasingly important role in our lives. Owing to continuously increasing battery life, these devices can be carried and used almost without interruption. In addition, ever cheaper mobile data plans and rising data rates provide users with a permanent connection to the Internet. As a result, smart devices are no longer mere communication tools but also information sources. Moreover, a multitude of third-party applications is available for these devices. Thanks to the built-in sensors, they can run location-based applications, health applications, or applications for Industry 4.0, to name but a few. However, such applications not only offer great benefits but at the same time pose an immense potential for harm: via the sensors, a wide variety of context data can be captured and relatively precise conclusions about the user can be drawn. Special attention should therefore be paid to data security and, in particular, to data privacy on these devices. Looking at the existing data security and privacy components of today's prevailing mobile platforms, however, it becomes apparent that none of them satisfactorily meets the special requirements of a mobile data security and privacy system. For this reason, this thesis focuses on the design and implementation of novel data security and privacy concepts for mobile applications. To this end, the following five research contributions are made: [FB1] Existing data security and privacy concepts are analyzed to identify their weaknesses. [FB2] A context-aware permission model is developed. [FB3] The permission model is conceptually embedded in a flexible privacy system and subsequently implemented. [FB4] The privacy system is extended to a holistic security system. [FB5] The resulting holistic security system is evaluated. To achieve these research goals, the Privacy Policy Model (PPM) is introduced, an entirely new model for formulating fine-grained permission rules that allow users to deactivate individual functional units of an application as needed and thereby restrict the application's access rights. In addition, users can reduce the accuracy of the data made available to an application. The PPM is implemented in the Privacy Policy Platform (PMP). The PMP is a permission system that not only enforces privacy policies but also fulfills several of the protection goals of data security. Several implementation strategies for the PMP are discussed and their advantages and disadvantages weighed against each other. To guarantee data security in addition to data privacy, the PMP is extended by the Secure Data Container (SDC). With the SDC, sensitive data can be stored securely and exchanged between applications. The applicability of the PMP and the SDC is demonstrated using practical examples from four different domains: location-based applications, health applications, applications in Industry 4.0, and applications for the Internet of Things.
    This analysis shows that the combination of PMP and SDC not only fulfills all protection goals that are relevant in the context of this thesis and based on the ISO standard ISO/IEC 27000:2009, but is also highly performant: by using the PMP and the SDC, the battery consumption of applications can be halved.
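
    To illustrate the idea of fine-grained permission rules, here is a minimal Python sketch of a PPM-style rule that lets the user disable a functional unit entirely or reduce the accuracy of the location data it receives. Class names, fields, and the grid-based coarsening are assumptions for illustration; the thesis' actual PPM/PMP implementation is not specified here.

        # Hypothetical sketch of a PPM-style permission rule: a functional unit
        # can be disabled entirely, or the accuracy of the data it receives can
        # be reduced. Names and the coarsening strategy are illustrative only.

        class PrivacyRule:
            def __init__(self, functional_unit, enabled, location_precision_km=None):
                self.functional_unit = functional_unit  # e.g., "route_tracking"
                self.enabled = enabled                  # user may switch the unit off
                self.location_precision_km = location_precision_km  # None = full accuracy

        def provide_location(rule, lat, lon):
            """Return the location as permitted by the rule, or None if denied."""
            if not rule.enabled:
                return None  # functional unit deactivated by the user
            if rule.location_precision_km is None:
                return lat, lon  # full accuracy permitted
            # Snap coordinates to a grid: ~0.01 degrees is roughly 1 km, so the
            # cell size matches the permitted precision.
            grid = 0.01 * rule.location_precision_km
            return round(lat / grid) * grid, round(lon / grid) * grid

        rule = PrivacyRule("route_tracking", enabled=True, location_precision_km=5)
        print(provide_location(rule, 48.7758, 9.1829))  # coarsened coordinates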

    Special Issue on Security and Privacy in Blockchains and the IoT

    The increasing digitalization in all areas of life is leading step-by-step to a data-driven society [...]

    Data Is the New Oil–Sort of: A View on Why This Comparison Is Misleading and Its Implications for Modern Data Administration

    Currently, data are often referred to as the oil of the 21st century. This comparison is not only used to express that data, as a resource, are just as important for the fourth industrial revolution as oil was for the technological revolution in the late 19th century. There are also further similarities between these two valuable resources in terms of their handling. Both must first be discovered and extracted from their sources. Then, the raw materials must be cleaned, preprocessed, and stored before they can finally be delivered to consumers. Despite these undeniable similarities, however, there are significant differences between oil and data in all of these processing steps, making data a resource that is considerably more challenging to handle. For instance, data sources, as well as the data themselves, are heterogeneous, which means there is no one-size-fits-all data acquisition solution. Furthermore, data can be distorted by the source or by third parties without being noticed, which affects both quality and usability. Unlike oil, there is also no uniform refinement process for data, as data preparation should be tailored to the subsequent consumers and their intended use cases. With regard to storage, it has to be taken into account that data are not consumed when they are processed or delivered to consumers, which means that the data volume that has to be managed is constantly growing. Finally, data may be subject to special constraints in terms of distribution, which may entail individual delivery plans depending on the customer and their intended purposes. Overall, it can be concluded that innovative approaches that address these inherent challenges are needed for handling data as a resource. In this paper, we therefore study and discuss the characteristics that make data such a challenging resource to handle. In order to enable appropriate data provisioning, we introduce a holistic research concept from data source to data sink that respects the processing requirements of data producers as well as the quality requirements of data consumers and, moreover, ensures a trustworthy data administration.

    Trustworthy, Secure, and Privacy-aware Food Monitoring Enabled by Blockchains and the IoT

    A large number of food scandals (e.g., falsely declared meat or non-compliance with hygiene regulations) are causing considerable concern to consumers. Although Internet of Things (IoT) technologies are used in the food industry to monitor production (e.g., for tracing the origin of meat or monitoring cold chains), the gathered data are not used to provide full transparency to the consumer. To achieve this, however, three aspects must be considered: a) The origin of the data must be verifiable, i.e., it must be ensured that the data originate from calibrated sensors. b) The data must be stored in a tamper-resistant, immutable manner and be open to all consumers. c) Despite this openness, the privacy of affected data subjects (e.g., the carriers) must still be protected. To this end, we introduce the SHEEPDOG architecture, which “shepherds” products from production to purchase to enable trustworthy, secure, and privacy-aware food monitoring. In SHEEPDOG, attribute-based credentials ensure trustworthy data acquisition, blockchain technologies provide secure data storage, and fine-grained access control enables privacy-aware data provision.
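
    The three aspects can be sketched in a few lines of Python, under loudly stated assumptions: the credential registry, block layout, and access rule below are invented for illustration and are not the actual SHEEPDOG components.

        import hashlib, json, time

        # Illustrative sketch of SHEEPDOG's three aspects: (a) trustworthy
        # acquisition via credentials, (b) tamper-evident hash-linked storage,
        # (c) privacy-aware provision that hides personal data from consumers.
        # All names and structures are assumptions, not the actual architecture.

        TRUSTED_SENSORS = {"sensor-42": "calibration-cert-abc"}  # assumed registry

        def acquire(sensor_id, credential, temp_c, carrier):
            """(a) Accept a reading only from a sensor with a known credential."""
            if TRUSTED_SENSORS.get(sensor_id) != credential:
                raise PermissionError("sensor not calibrated/trusted")
            return {"sensor": sensor_id, "temp_c": temp_c,
                    "carrier": carrier, "ts": time.time()}

        def append_block(chain, record):
            """(b) Append a record to a hash-linked chain (tamper-evident)."""
            prev = chain[-1]["hash"] if chain else "0" * 64
            payload = json.dumps(record, sort_keys=True)
            block = {"prev": prev, "record": record,
                     "hash": hashlib.sha256((prev + payload).encode()).hexdigest()}
            chain.append(block)
            return block

        def read_record(block, role):
            """(c) Consumers see the reading; the carrier's identity stays hidden."""
            record = dict(block["record"])
            if role != "auditor":
                record.pop("carrier", None)
            return record

        chain = []
        append_block(chain, acquire("sensor-42", "calibration-cert-abc", 3.9, "driver-7"))
        print(read_record(chain[0], role="consumer"))  # no carrier field shown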

    Introducing the enterprise data marketplace: a platform for democratizing company data

    In this big data era, multitudes of data are generated and collected that contain the potential to yield new insights, e.g., for enhancing business models. To leverage this potential through, e.g., data science and analytics projects, the data must be made available. In this context, data marketplaces are used as platforms to facilitate the exchange and thus the provisioning of data and data-related services. Data marketplaces are mainly studied for the exchange of data between organizations, i.e., as external data marketplaces. Yet, the data collected within a company also have the potential to provide valuable insights for this same company, for instance to optimize business processes. Studies indicate, however, that a significant amount of data within companies remains unused. In this sense, we propose to employ an Enterprise Data Marketplace, a platform to democratize data within a company among its employees. The specifics of the Enterprise Data Marketplace, how it can be implemented, and how it makes data available throughout a variety of systems such as data lakes have not been investigated in the literature so far. Therefore, we present the characteristics and requirements of this kind of marketplace. We also distinguish it from other tools such as data catalogs, provide a platform architecture, and highlight how it integrates with the company’s system landscape. The presented concepts are demonstrated through an Enterprise Data Marketplace prototype, and an experiment reveals that this marketplace significantly improves data consumer workflows in terms of efficiency and complexity. This paper is based on several interdisciplinary works combining comprehensive research with practical experience from an industrial perspective. We therefore present the Enterprise Data Marketplace as a distinct marketplace type and provide the basis for establishing it within a company.
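
    As a rough illustration of the core marketplace workflow (publish, discover, request access), consider the following minimal Python sketch. All classes, fields, and method names are hypothetical assumptions for illustration and are not taken from the paper's prototype.

        from dataclasses import dataclass, field

        # Illustrative sketch of an enterprise data marketplace: employees
        # publish data assets, others discover them and request access while
        # the owner stays in control. Entirely assumed, not the actual design.

        @dataclass
        class DataAsset:
            name: str
            owner: str
            description: str
            source_system: str              # e.g., a data lake or a CRM database
            access_requests: list = field(default_factory=list)

        class EnterpriseDataMarketplace:
            def __init__(self):
                self.catalog = {}

            def publish(self, asset):
                self.catalog[asset.name] = asset

            def search(self, keyword):
                return [a for a in self.catalog.values()
                        if keyword.lower() in a.description.lower()]

            def request_access(self, asset_name, employee):
                # Requests are queued for the data owner's approval
                self.catalog[asset_name].access_requests.append(employee)

        mp = EnterpriseDataMarketplace()
        mp.publish(DataAsset("sales_2023", "alice", "monthly sales figures", "data lake"))
        print([a.name for a in mp.search("sales")])
        mp.request_access("sales_2023", "bob")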

    Query Processing in Blockchain Systems: Current State and Future Challenges

    When, in 2008, Satoshi Nakamoto envisioned the first distributed database management system that relied on a cryptographically secured chain of blocks to store data in an immutable and tamper-resistant manner, his primary use case was the introduction of a digital currency. Owing to this use case, the blockchain system was geared towards efficient storage of data, whereas the processing of complex queries, such as provenance analyses of data history, was out of focus. The increasing use of Internet of Things technologies and the resulting digitization in many domains, however, have led to a plethora of novel use cases for a secure digital ledger. For instance, in the healthcare sector, blockchain systems are used for the secure storage and sharing of electronic health records, while the food industry applies such systems to enable reliable food-chain traceability, e.g., to prove compliance with cold chains. In these application domains, however, querying the current state is not sufficient; comprehensive history queries are required instead. Due to these altered usage modes involving more complex query types, it is questionable whether today's blockchain systems are prepared for this type of usage and whether they can process such queries efficiently. In our paper, we therefore investigate novel use cases for blockchain systems and elicit their requirements towards a data store in terms of query capabilities. We reflect on the state of the art in terms of query support in blockchain systems and assess whether it is capable of meeting the requirements of such more sophisticated use cases. As a result, we identify future research challenges with regard to query processing in blockchain systems.
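
    The gap between state queries and history queries can be made concrete with a small Python sketch; the block layout and the state index below are illustrative assumptions, not how any particular blockchain system is implemented.

        # Illustrative contrast between the two query types: the current state
        # is a single index lookup, whereas a provenance (history) query must
        # scan the entire chain. Block layout is an assumption for illustration.

        def asset_history(chain, asset_id):
            """History query: every recorded state of one asset, oldest first."""
            return [b["tx"] for b in chain if b["tx"]["asset"] == asset_id]

        def current_state(state_index, asset_id):
            """State query: a single lookup in a maintained state index."""
            return state_index.get(asset_id)

        chain = [
            {"tx": {"asset": "batch-1", "holder": "farm", "temp_ok": True}},
            {"tx": {"asset": "batch-1", "holder": "truck", "temp_ok": True}},
            {"tx": {"asset": "batch-1", "holder": "store", "temp_ok": False}},
        ]
        state_index = {b["tx"]["asset"]: b["tx"] for b in chain}  # last write wins

        print(current_state(state_index, "batch-1"))  # O(1) lookup
        print(asset_history(chain, "batch-1"))        # O(n) full-chain scan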

    Maritime Anomaly Detection for Vessel Traffic Services: A Survey

    A Vessel Traffic Service (VTS) plays a central role in maritime traffic safety. Regulations are given by the International Maritime Organization (IMO) and guidelines by the International Association of Marine Aids to Navigation and Lighthouse Authorities (IALA). Accordingly, VTS facilities utilize communication and sensor technologies such as the Automatic Identification System (AIS), radar, and radio communication, among others. Furthermore, VTS operators are motivated to apply Decision Support Tools (DSTs), since these can reduce workloads and increase safety. A promising type of DST is anomaly detection. This survey presents an overview of state-of-the-art approaches to anomaly detection for the surveillance of maritime traffic. The approaches are characterized in the context of VTS and, most notably, sorted according to the utilized communication and sensor technologies, the addressed anomaly types, and the underlying detection techniques. On this basis, current trends as well as open research questions are deduced.
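
    One simple AIS-based check of the kind such surveys cover is comparing the speed implied by successive position reports against the vessel's reported speed over ground (SOG). The Python sketch below illustrates this; the message fields and the tolerance threshold are assumptions for illustration, not taken from any surveyed approach.

        import math

        def haversine_nm(lat1, lon1, lat2, lon2):
            """Great-circle distance between two positions in nautical miles."""
            r_nm = 3440.065  # Earth radius in nautical miles
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
            a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
            return 2 * r_nm * math.asin(math.sqrt(a))

        def speed_anomaly(msg_a, msg_b, tolerance_kn=3.0):
            """Flag a track if implied and reported speeds diverge, which may
            indicate spoofed or faulty AIS data (illustrative threshold)."""
            hours = (msg_b["ts"] - msg_a["ts"]) / 3600.0
            implied_kn = haversine_nm(msg_a["lat"], msg_a["lon"],
                                      msg_b["lat"], msg_b["lon"]) / hours
            return abs(implied_kn - msg_b["sog_kn"]) > tolerance_kn

        a = {"ts": 0,    "lat": 54.00, "lon": 8.00, "sog_kn": 12.0}
        b = {"ts": 1800, "lat": 54.10, "lon": 8.00, "sog_kn": 12.0}
        print(speed_anomaly(a, b))  # implied ~12 kn over 30 min: not anomalous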

    SMARTEN—A Sample-Based Approach towards Privacy-Friendly Data Refinement

    Two factors are crucial for the effective operation of modern-day smart services: First, IoT-enabled technologies have to capture and combine huge amounts of data on data subjects. Then, all these data have to be processed exhaustively by means of techniques from the area of big data analytics. With regard to the latter, thorough data refinement in terms of data cleansing and data transformation is the decisive cornerstone. Studies show that data refinement reaches its full potential only when domain experts are involved in the process. However, this means that these experts need full insight into the data in order to be able to identify and resolve any issues therein, e.g., by correcting or removing inaccurate, incorrect, or irrelevant data records. For sensitive data (e.g., private or confidential data) in particular, this poses a problem, since the data are thereby disclosed to third parties such as domain experts. To this end, we introduce SMARTEN, a sample-based approach towards privacy-friendly data refinement to smarten up big data analytics and smart services. SMARTEN applies a revised data refinement process that fully involves domain experts in data pre-processing but does not expose any sensitive data to them or any other third party. To achieve this, domain experts obtain a representative sample of the entire data set that meets all privacy policies and confidentiality guidelines. Based on this sample, domain experts define data cleansing and transformation steps. Subsequently, these steps are converted into executable data refinement rules and applied to the entire data set. Domain experts can request further samples and define further rules until the data quality required for the intended use case is reached. Evaluation results confirm that our approach is effective in terms of both data quality and data privacy.
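
    The sample-based process lends itself to a short Python sketch: the expert sees only a privacy-compliant sample, defines a rule on it, and the rule is then applied to the full, undisclosed data set. The sampling strategy, policy format, and rule format below are illustrative assumptions, not SMARTEN's actual algorithms.

        import random

        def compliant_sample(records, k, policy):
            """Draw k records and strip fields the privacy policy forbids."""
            sample = random.sample(records, k)
            return [{f: v for f, v in r.items() if f not in policy["hidden_fields"]}
                    for r in sample]

        def apply_rule(records, rule):
            """Apply an expert-defined refinement rule to the entire data set."""
            refined = [rule(r) for r in records]
            return [r for r in refined if r is not None]

        records = [{"patient": "p1", "glucose": 95},
                   {"patient": "p2", "glucose": -1},   # implausible sensor error
                   {"patient": "p3", "glucose": 180}]
        policy = {"hidden_fields": {"patient"}}

        print(compliant_sample(records, 2, policy))    # expert sees no identifiers
        drop_errors = lambda r: r if r["glucose"] >= 0 else None
        print(apply_rule(records, drop_errors))        # rule runs on the full set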
